
AI Toolset—Jupyter Notebook

Getting Started with Jupyter Lab

After setting up your lab environment, you are ready to explore Jupyter Lab. Curious about the features that set Jupyter Lab apart from simple text editors, which mainly offer syntax highlighting, you begin reviewing the documentation on the official website.

You learn that Jupyter Lab is a flexible, web-based IDE designed for data science, machine learning, and other coding tasks. It provides a streamlined interface for writing, running, and visualizing code in various programming languages, but it is best suited for Python. You also discover that while simple code editors enable you to write code and comments—provided they are supported by the programming language you use—they do not allow you to combine formatted text with executable code in the same file. In contrast, Jupyter Lab utilizes Jupyter Notebooks, which let you integrate regular text with executable code blocks.

Jupyter Notebooks are structured data files that represent your code, metadata, content, and outputs. When you save a notebook to disk, it is stored with the .ipynb extension in JavaScript Object Notation (JSON) format. On the left side of the figure you can see the Jupyter Notebook file, while on the right side, you can see that same file in its raw JSON form. The title and first complementary text are in markdown, which is denoted in the JSON notation as "cell_type": "markdown".

The other parts of the notebook are similarly denoted in JSON. Besides markdown, you can also use simple raw text. The example also shows a simple Python function, prefixed with [1]:. These square brackets mark code blocks and the current execution count. The number one means it is the first code block that was run. Below it you can see the actual function call, prefixed by [2]:. Finally, below the function call, you can see the output, which in this example is "Hello from Python". In the JSON structure, at the bottom of the figure, you can see the source key with its value set to the actual function call that produced the output. This is useful if you want to map the outputs to their relevant function calls.
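As an illustration, a minimal notebook with one markdown cell and one code cell can be assembled and serialized with Python's standard json module. The cell contents below are invented for the example; a real .ipynb file also carries kernel and language metadata that Jupyter fills in for you:

```python
import json

# A minimal sketch of the .ipynb JSON structure described above.
# Field names follow the nbformat 4 layout; the cell contents are made up.
notebook = {
    "nbformat": 4,
    "nbformat_minor": 5,
    "metadata": {},
    "cells": [
        {   # rendered as formatted text
            "cell_type": "markdown",
            "metadata": {},
            "source": ["# Nexus Automation\n", "Simple **Python** script."],
        },
        {   # executable code together with its captured output
            "cell_type": "code",
            "execution_count": 1,
            "metadata": {},
            "source": ["print('Hello from Python')"],
            "outputs": [
                {
                    "output_type": "stream",
                    "name": "stdout",
                    "text": ["Hello from Python\n"],
                }
            ],
        },
    ],
}

# Serializing the dictionary produces the same kind of JSON you see
# when you open an .ipynb file in a plain text editor.
print(json.dumps(notebook, indent=1))
```

Note how the source key of the code cell and the text key of its output let you map outputs back to the code that produced them, exactly as described above.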

You decide to create a notebook and a simple Python function that will display the IP addresses of the configured interfaces on the Cisco Nexus switch in your environment.

Step 1

Open the terminal by clicking on the terminal icon or by pressing Ctrl+Alt+t.

Answer

The terminal icon is in the taskbar at the bottom.

You should see the terminal.

Step 2

Change the directory to jupyter and run Jupyter Lab from that directory by issuing the jupyter lab command.

Answer

Running Jupyter Lab from the jupyter directory sets the jupyter directory as the root for all your notebooks, which you can see highlighted in the output. After the server fully loads, it also provides information on how to access the application via a URL, also highlighted in the output. Your output may be a bit different compared to the one in the lab guide.

student@student-vm:~$ cd jupyter
student@student-vm:~/jupyter$ jupyter lab
[I 2024-09-19 08:40:33.805 ServerApp] jupyter_ai | extension was successfully linked.
[I 2024-09-19 08:40:33.805 ServerApp] jupyter_lsp | extension was successfully linked.
[I 2024-09-19 08:40:33.809 ServerApp] jupyter_server_terminals | extension was successfully linked.
[I 2024-09-19 08:40:33.813 ServerApp] jupyterlab | extension was successfully linked.
[I 2024-09-19 08:40:33.820 ServerApp] notebook_shim | extension was successfully linked.
[I 2024-09-19 08:40:33.849 ServerApp] notebook_shim | extension was successfully loaded.
[I 2024-09-19 08:40:33.850 AiExtension] Configured provider allowlist: None
[I 2024-09-19 08:40:33.850 AiExtension] Configured provider blocklist: None
[I 2024-09-19 08:40:33.850 AiExtension] Configured model allowlist: None
[I 2024-09-19 08:40:33.850 AiExtension] Configured model blocklist: None
[I 2024-09-19 08:40:33.850 AiExtension] Configured model parameters: {}
[I 2024-09-19 08:40:33.940 AiExtension] Registered model provider `ai21`.

<-- Output Omitted -->

[I 2024-09-19 08:40:34.069 ServerApp] Serving notebooks from local directory: /home/student/jupyter
[I 2024-09-19 08:40:34.069 ServerApp] Jupyter Server 2.14.2 is running at:
[I 2024-09-19 08:40:34.069 ServerApp] http://localhost:8888/lab?token=6f244b92863e2739a45145eddc677c5e99d57b4a9f4f920d
[I 2024-09-19 08:40:34.069 ServerApp]     http://127.0.0.1:8888/lab?token=6f244b92863e2739a45145eddc677c5e99d57b4a9f4f920d
[I 2024-09-19 08:40:34.069 ServerApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
[C 2024-09-19 08:40:34.103 ServerApp] 
    
    To access the server, open this file in a browser:
        file:///home/student/.local/share/jupyter/runtime/jpserver-5842-open.html
    Or copy and paste one of these URLs:
        http://localhost:8888/lab?token=6f244b92863e2739a45145eddc677c5e99d57b4a9f4f920d

Note

The Jupyter Lab server should automatically start your default browser, Chromium in this case, and open a tab where you can see the Jupyter Lab landing page. If, for some reason, this automatic process fails, simply copy and paste the URL provided in your output into the Chromium browser on the jump host.

Notice how the Jupyter Lab landing page is divided into two parts. On the left side you can see the file and folder explorer, where you can browse and manage your files and folders. On the right, you can see the Launcher tab, where you can create a Notebook file or any other supported file format.

Keep in mind that Jupyter Lab sees your /home/student/jupyter folder as its root folder, denoted simply as / within the Jupyter Lab's file explorer.

Step 3

Click Python 3 (ipykernel) in the Notebook section, which is located at the top of the Launcher tab, to create a new Notebook file.

Answer

You should see a new file named Untitled.ipynb appearing in the file explorer on the left. On the right, you should see a new tab labeled with the same filename.

Step 4

Click File in the main taskbar, and select Rename Notebook....

Answer

You will see a pop-up window.

Step 5

Rename the file to NexusAutomation.ipynb and click Rename.

Answer

The new filename should be visible in the file explorer, and in the editor.

Step 6

In the editor, right side of the Jupyter Lab, click inside the first cell.

Answer

You should see the cursor inside the first cell, marked by a yellow rectangle in the figure.

The default format of a cell is Code. When you select a cell, you can see the format in the main toolbar of the editor.

The format of the cell defines how it is rendered in the Notebook. A code block is meant to contain programming code. You can also set the format of a cell to Markdown or Raw. Markdown is a lightweight markup language that you can use to add formatting elements to plaintext text documents. Raw format is simply meant to display unformatted text. In the following steps, you will use markdown to create a title and descriptions of your code, which is going to be contained within cells set to Code format.

Step 7

Click Code in the main toolbar and select Markdown from the drop-down menu.

Answer

The format of the cell is now set to Markdown.

Step 8

Click in the cell and write the # Nexus Automation text.

Answer

You should end up with an unrendered cell containing the markdown code you just typed.

You can continue writing markdown text in the cell. Pressing the enter key inserts a new line in the same cell. For a markdown cheat sheet, visit this website: https://www.markdownguide.org/cheat-sheet/

Step 9

While still in the cell with the title, press the enter key and write the following text: Simple **Python** script based on *Netmiko* library.

Answer

Notice how the cell still displays the source markdown code.

Note

In Markdown syntax, you enclose the text you want to format as bold within double asterisks and use a single asterisk to format the text as italic.

Step 10

Run the cell with the markdown code to render it by clicking the triangle icon in the main taskbar.

Answer

You will see the rendered text where the title has larger letters in bold. The word Python is formatted in bold, whereas the word Netmiko is in italic. Notice another empty cell appearing right below the formatted text.

In the following steps, you will copy a Python script into the new cell below the rendered text.

Note

Instead of clicking the run code icon, you can also press the Ctrl+Enter keys to run the code inside the currently active cell. The activated cell is outlined with a dark blue rectangle.

Step 11

Open another terminal by clicking the terminal icon, or press the Ctrl+Alt+t keys. Change the directory to solutions and open the script using the Vim editor by running vi task1_script1.py.

student@student-vm:~$ cd solutions
student@student-vm:~/solutions$ vi task1_script1.py

Answer

You will see the content of the file in the Vi editor.

Step 12

Now type :%y+ and press the enter key to copy the entire content of the file into your clipboard.

Answer

Before you press the enter key, make sure that you see the command in the bottom-left corner.

After you press the enter key, you should see the text 28 lines yanked into "+ in the bottom-left corner of the editor, which signals a successful copy-to-clipboard operation.

Step 13

Close the editor by typing :q and press the enter key.

Answer

Again, make sure that you see the command :q in the bottom-left corner of the editor. If successful, the terminal with the editor will close.

Step 14

Paste the clipboard content into the Notebook cell by clicking in the cell and pressing the Ctrl+v keys.

Answer

You will see the cell filled with the color-coded Python code.

The comment lines start with a # and are colored light blue; they provide a high-level description of what the code does. Notice how part of the code is out of view due to the limited space available on the display for the editor.

Step 15

Place your mouse between the file explorer and the editor until you see the icon with two arrows pointing in opposite directions.

Answer

The arrows mark the vertical delimiter between the file explorer and the editor.

Step 16

Hold the left mouse button and extend the editor to the left to increase its screen area.

Answer

Now you should be able to see all of the code and still have the filename visible in the explorer.

Step 17

Click inside the cell with the code and run it by pressing the Ctrl+Enter keys.

Answer

You should see the following output below the code:

Connected successfully!
Sending the following command: show ip interface brief

IP Interface Status for VRF "default"(1)
Interface            IP Address      Interface Status

Your Notebook now contains some markdown-formatted text, a runnable code cell with your script, and the output produced by the script.

A more detailed explanation of the code is deliberately left out for now.

Step 18

Save your Notebook and create a checkpoint by pressing Ctrl+s keys, or by clicking the save icon in the main taskbar.

Answer

In the file explorer, you will see the Modified column update to Now for the notebook you just saved.

In the following tasks, you will learn how to utilize Jupyter AI features and use a GPT model to explain the code in greater detail and even generate new code that will help you with configuring your devices automatically.

Integrate GPT models to Jupyter Lab

Satisfied with how easy it is to use Python scripts within the Jupyter Lab IDE, you decide to enhance its capabilities with generative AI. A quick glance at the documentation reveals a plugin called Jupyter AI, which is easily installed with the pip install jupyter-ai command.

Reading the documentation, you discover that the Jupyter AI plugin extends the functionality of Jupyter Lab by integrating generative AI models directly into the environment. Jupyter AI includes a native chat interface within JupyterLab, enabling real-time interaction with AI models for tasks such as code generation and problem solving. It supports a wide range of generative AI providers, including AI21, Anthropic, Cohere, Hugging Face, OpenAI, Ollama, GPT4All, and more, giving users access to multiple model options to fit their needs.

You also learn that Jupyter Lab acts as a bridge between the user and an inference server that hosts the GPT model and processes queries. This setup follows a client-server architecture, where the server can be hosted either in the cloud or on-premises. To use cloud-based AI providers, you need an account with the provider and an access token, which you configure in Jupyter Lab. On-premises inference servers, such as Ollama and GPT4All, are also supported. These platforms offer a wide range of GPT models that you can download and use completely offline. Hosting GPT models on-prem piques your interest, since it is important for maintaining control over confidential data that should not be shared with public platforms.

You decide to use the Ollama inference server. You install the Ollama server using the curl -fsSL https://ollama.com/install.sh | sh CLI command that you saw in the documentation. You remember that GPT models can run on GPUs or CPUs. Since you don't have a GPU available, you become worried whether the Ollama server will even work without a GPU and if so, will it take too much time to process the queries. You start reading the documentation regarding CPU-based inference in the Ollama server. You discover that the Ollama server is based on the llama.cpp loader which is optimized for CPU-based inference.

Confident in your choice, you begin configuring Jupyter Lab to work with the Ollama inference server.

Step 19

Open the terminal by pressing the Ctrl+Alt+t keys, or clicking the terminal icon, and issue the ollama list command to list already downloaded models.

Answer

You will see four models. The model name, in the NAME column, is formatted as name:tag. The ID denotes the specific iteration of the model, SIZE is the size on disk, and MODIFIED tells when the model was downloaded or updated. Note that the order of appearance might differ for you, and you might see different timestamps in the MODIFIED column.

student@student-vm:~$ ollama list                         
NAME                ID              SIZE      MODIFIED       
codegemma:latest    0c96700aaada    5.0 GB    3 days ago    
llama3.1:latest     42182419e950    4.7 GB    3 days ago        
mistral:instruct    f974a74358d6    4.1 GB    3 days ago        
phi3.5:latest       61819fb370a3    2.2 GB    3 days ago        

There are many different types of models that are trained on specific tasks. To use the models in a chat-like interface where you want to give them instructions on what to do, you need models that were specifically trained to follow instructions. All four models are so-called instruct-type models. In the case of mistral, you can see the tag instruct, letting you know what kind of model it is. The other three models simply have the latest tag. To see additional details, you can visit the Ollama model library. For instance, the phi3.5:latest model properties are in the figure.

In the figure, you can see the number of parameters (3.82B, where B stands for billions) and the level of quantization (Q4_0, meaning a 4-bit integer format). Let's shed some light on what the number of parameters and the level of quantization actually mean.

GPT models scale based on the number of parameters set during training. Models with more parameters typically have broader knowledge and can solve more complex tasks. The number of parameters determines how much system RAM or GPU memory is required and how long it takes to process a query—larger models need more memory and computation time. GPT parameters are usually stored as 32-bit floating point numbers. To estimate memory requirements in gigabytes (GB), you can check the model's disk usage, as it is roughly the same for inference. Alternatively, you can use the formula M = P × B, where P is the number of parameters (in billions) and B is the number of bytes per parameter. For example, the Phi3.5 model has 3.82 billion parameters. With 32-bit values (4 bytes per parameter), it requires around 15 GB of memory. Note that this number is just a rough estimate, since additional data is needed for inference besides the model parameters! For smaller models, you can normally increase the number you get from the formula by 20% to take the additional data into account: M = P × B × 1.20.
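The rule of thumb above can be sketched in a few lines of Python. The 20% overhead factor is the rough guideline from the text, not an exact figure:

```python
def estimate_memory_gb(params_billion: float, bits_per_param: int,
                       overhead: float = 1.20) -> float:
    """Rough inference-memory estimate: M = P x B x overhead.

    params_billion -- number of parameters in billions
    bits_per_param -- precision of the stored weights (32 for fp32, 4 for Q4_0)
    overhead       -- headroom for non-weight data (~20% per the rule of thumb)
    """
    bytes_per_param = bits_per_param / 8
    return params_billion * bytes_per_param * overhead

# Phi3.5 (3.82B parameters) at full 32-bit precision: ~18 GB with overhead
print(round(estimate_memory_gb(3.82, 32), 1))  # 18.3

# The same model quantized to 4-bit integers: ~2.3 GB, close to the
# 2.2 GB size on disk reported by `ollama list`
print(round(estimate_memory_gb(3.82, 4), 1))   # 2.3
```

With the overhead factor set to 1.0, the bare formula also reproduces the 700 GB figure for a 175B-parameter model at 32-bit precision (175 × 4 bytes).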

Quantization can significantly reduce memory usage and processing time by converting 32-bit values to lower-precision data types, such as 8- or 4-bit integers. This trade-off reduces memory needs at the cost of some loss in answer quality, as fewer bits capture less detail. The Phi3.5 model uses 4-bit integers, which cuts the memory requirement down to around 2.2 GB (since 4 bits are half a byte). Aggressive quantization below 16 bits usually reduces answer quality, but llama.cpp allows 4-bit quantization with minimal quality loss. Model size also impacts processing time. For models larger than 13 billion parameters, using a CPU alone is impractical due to long processing times, and a GPU is needed for practical speeds. Note that this applies only to inference—training requires significantly more RAM, typically three to four times as much.

To put things into perspective, GPT-3.5 has around 175B parameters. Without quantization, it would require at least 700 GB of system or GPU RAM for inference and around 2100 GB for training.

Note

To download additional models, visit the Ollama model library and note the model name and corresponding tag. Use the ollama pull name:tag command to download the model. For example, to download the 2B version of gemma2, you can use the ollama pull gemma2:2b command. Note, however, that the model library is frequently updated and some models might not be relevant or available in the future.

Step 20

In Jupyter Lab, click the chat interface button in the left navigation panel.

Answer

You should see the chat interface with a message stating additional configuration is required.

Note

You can extend the chat interface by placing the mouse between the chat interface and the editor tab, holding left-click, and dragging to the right. Adjust the view as you see fit.

Step 21

Click the blue button with the cog icon in the chat interface.

Answer

You will see the configuration page.

The Language model drop-down list lets you select supported AI providers and GPT models that are used for regular inferencing tasks. These models will process your queries and generate answers. The Embedding model drop-down list contains various embedding models supported by the AI providers. These models are specially trained and designed to convert text into numeric vectors, a core component of RAG applications. The Inline completions model can be set to suggest code snippets in real time during development. It takes into account the entire Notebook as the history from which it derives code completion suggestions.

Below, you can also set the API keys needed for cloud AI providers, such as OpenAI (ChatGPT), Hugging Face Hub, and so on. You can also set whether pressing the enter key in the chat interface creates a new line or sends the message. Keep in mind that the default behavior is to send the message to the GPT.

For this task, you will configure only the core Language model for regular inferencing tasks. Since you will be using an on-prem inferencing server, you will not use any API keys.

Step 22

Click the Completion model drop-down menu under the Language model section.

Answer

You will see a list of available AI providers and GPT models in the provider::model notation.

Step 23

Scroll down and locate the Ollama entry.

Answer

Notice how there is an asterisk after the double colon. The provider::* notation is reserved for provider::model combinations that are not hardcoded in advance in Jupyter AI. In these cases, you specify the exact model ID in the following step. You can also see that the Hugging Face Hub has the same notation.

Step 24

Click the Ollama::* entry in the list.

Answer

Notice how the Completion model is populated with the Ollama::* entry. You can also see the additional fields for Local model ID and Base API URL. The Local model ID is the name of the GPT model that you want to use. The Base API URL is the URL of the Ollama inference server that you want to use. Normally, you have to specify the URL if the Ollama server is running on a different machine or is using a nondefault port. The Ollama server is running on the same machine as Jupyter AI and was installed with the default settings, so you can leave the URL field empty and let Jupyter AI use its default settings.

Step 25

Write phi3.5:latest in the Local model ID field and click Save Changes.

Answer

The Save Changes button is in the bottom-left corner of the settings page.

You will see a green message notifying you that the settings were saved successfully.

Note

To use any Ollama model, you have to download it first via the Ollama server. You can use the ollama pull command followed by the model's name. You can find all available models in the Ollama model library. To list all models already downloaded, use the ollama list command. Use the entries in the NAME column for the Local model ID field.

Step 26

Click the return arrow icon in the top-left corner of the settings page to navigate back to the Jupyter AI chat interface.

Answer

You will see the introduction text of Jupyternaut - the chatbot, and some useful tips.

The /ask and /learn commands pertain to the built-in Retrieval-Augmented Generation (RAG) pipeline, which is an advanced feature enabling file lookup capabilities and will not be covered in this exercise. Let's test the selected model and see a couple of these commands in action.

Step 27

Write the Which GPT model are you? text in the prompt field.

Answer

Your chat interface should look like the one in the figure.

Step 28

Press the enter key or click the send message icon.

Answer

After around 20 seconds, the answer will start streaming. Note that GPT models are statistical by nature, which means their output is not deterministic; entering the same prompt will not produce the exact same answer! From here on, the answers you get will probably differ significantly from the ones provided in the lab guide.

Based on the answer we got, you can see that Jupyter AI correctly recognized phi3.5:latest as the selected GPT model. It also provided an additional description of Jupyternaut's capabilities. You can see from the answer that Jupyternaut is not a regular chatbot, but a complex application based on large language models (LLMs). Jupyternaut's answers can be quite verbose and take up a lot of screen space. Let's clear the current chat.

Step 29

Write a slash in the prompt field and locate the /clear command.

Answer

You will see a drop-down menu with the available commands, which were also described on the welcome page a couple of steps earlier.

Step 30

Click the /clear command on the list to insert the command in the prompt field, and press the enter key (or click the send message icon) to clear the chat window.

Answer

The chat window should now be clear and only the command summary page should be visible below the prompt field.

The integration of AI in Jupyter Lab works as expected. This type of chatbot usage is fairly common, but the real strength of Jupyter AI lies in how AI is integrated in the Notebook itself. In the following task, you will explore an additional integration of AI within the Jupyter Lab workflow, such as code explanation, generation, and debugging.

Network Automation with Jupyter AI

With Jupyter AI configured and operational, you begin to wonder about the functionalities it offers. You want to use generative AI within Jupyter Lab to explain pieces of code in Notebook cells, generate additional code, and see how you can leverage the AI integration for debugging purposes.

Step 31

If the NexusAutomation.ipynb notebook is not already open, you can navigate to it by clicking the folder icon to open the file explorer. If it is open, you can skip this and the following two steps.

Answer

The folder icon is located at the top of the left navigation panel.

You should see the NexusAutomation.ipynb file in the root folder.

Step 32

Double-click the NexusAutomation.ipynb notebook to open it.

Answer

You should see the notebook open on the right. If you have the Launcher tab present, the notebook will open next to it. You can close the Launcher tab. Without the Launcher tab, your screen should now look like the figure.

Step 33

Click the AI chat interface icon.

Answer

It is located at the bottom of the left navigation panel.

You will see the command summary page.

Step 34

Click inside the cell with the code to activate it.

Answer

You should see blue vertical bars next to the cells with the code and its output as well as a blue rectangle around the cell with the code.

Step 35

Right-click in the cell with the Python script.

Answer

You will see a drop-down menu.

Step 36

Move the mouse cursor on Generative AI.

Answer

You will see another drop-down menu with the options to Explain, Fix, Optimize, and Refactor code.

The Fix code option is grayed out since this works only if you select a code cell that produced an output with some error message.

Step 37

Click the Explain code (1 active cell) option.

Answer

You will see that Jupyter AI automatically generates a prompt with the instruction to explain the code and the code itself copied from the cell. It might take around 20 seconds for the GPT to start generating the response.

We got a long and verbose response. For clarity, it will be presented in multiple parts. You can see that the answer starts with the nonsensical phrase "Certain extranet!" This type of wrong response is normally called a hallucination. Hallucination is more pronounced in small models since their knowledge base is quite limited. Next, the response explains the line where you import the Netmiko library. The explanation continues with the device parameters. The explanation of the parameters is quite good, except the part where it tries to explain the device_type parameter, which defines the switch OS. This explanation is offered inside a code snippet in the form of Python comment lines. The explanation of the try-except routine is also on point.

The second part of the response begins with the explanation of the code responsible for sending the CLI command and retrieving the output from the switch. It ends by discussing the disconnection routine and why it is a good practice.

The small Phi3.5, with only around 3.8 billion parameters, was capable of explaining what the code does, despite some trouble with reasoning when it came to the greeting and the Cisco OS definitions. All things considered, the response was factually correct and easy to understand. You can use this approach whenever you come across a script you would like to use but don't really understand. It makes it easy to adopt existing code that you can augment to meet your needs.

You probably noticed that the model is quite verbose and uses a relaxed, friendly, and optimistic tone. You can instruct the GPT model to write in a certain tone and level of verbosity by recreating the prompt with these instructions. For example, you could write a prompt like this: "Explain the code. Use concise and professional language. The code: <paste the code here>". Feel free to try different instructions and experiment with the model to learn how to make it respond according to your wish.

Now let's see whether a model with fewer than 4 billion parameters is capable of writing code based on instructions.

Note

You could get a better response by using a larger model. You can set one at any time by clicking the gear icon in the chat interface and writing mistral:instruct or llama3.1:latest in the Local model ID field, since these models are pre-downloaded in the lab environment. Do not forget to click the Save Changes button if you plan on changing the model and seeing what kind of code explanations the other models provide. You can also download other models using the ollama pull command. The jump host has 16 GB of RAM, so inspect the model card in the Ollama library to make sure the model you want to download is not larger than around 12 GB. Note also that larger models take more time to generate the response.

Step 38

Clear the chat interface by writing /clear in the prompt field and pressing the enter key (or clicking the send message icon).

Answer

You should see the command summary page again.

Step 39

Open another terminal by pressing Ctrl+Alt+t keys (or click the terminal icon), change the directory to prompts and issue the vi prompt1.txt command.

student@student-vm:~$ cd prompts
student@student-vm:~/prompts$ vi prompt1.txt

Answer

You should see the following prompt:

Write a simple Python script that will configure the ethernet 1/1 interface IP address to 172.16.0.111/24 on a Cisco Nexus v9300 switch. You can only use the Netmiko library. The switch can be accessed by SSH on 172.16.0.10. Do not provide explanations of the code.

Step 40

Copy the content to the clipboard by issuing the (don't forget the colon) :%y+ command and paste it into the chat interface by pressing Ctrl+v. Now press enter (or click the send icon) to send the prompt to the GPT model.

Answer

It will take around 20 seconds for the GPT to start writing the response. Your response will most likely be very different.

At first glance, the script we got looks good. One glaring issue is the wrong netmask, set to 255.255.255.128 instead of 255.255.255.0. In general, GPT models tend to make mistakes with precise data, such as converting a /24 CIDR notation to the regular octet format, so you always have to check and verify all the answers GPT models provide, regardless of how big the model you are using is! This kind of prompting, where you describe a task without providing any examples, is called zero-shot prompting and is normally enough for simple tasks. For more complicated tasks, a better prompt with some examples is necessary - a technique called few-shot prompting. Before moving on, you will test the script with the proper credentials and the correct netmask.
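Mistakes like this one are easy to catch programmatically. As a quick sanity check, Python's standard ipaddress module converts CIDR prefixes to dotted netmasks, so you can verify what a generated script hardcodes:

```python
import ipaddress

# Verify the CIDR-to-netmask conversion that the GPT model got wrong:
# /24 corresponds to 255.255.255.0, not 255.255.255.128 (which is /25).
network = ipaddress.ip_network("172.16.0.111/24", strict=False)
print(network.netmask)     # 255.255.255.0
print(network.prefixlen)   # 24

# The mask the model produced actually belongs to a /25 prefix.
wrong = ipaddress.ip_network("172.16.0.111/25", strict=False)
print(wrong.netmask)       # 255.255.255.128
```

A two-line check like this is a cheap habit to build whenever a model emits IP addressing data.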

Note

If you get a nonsensical response, simply clear the chat interface using the /clear command and start a new chat by clicking the + icon at the very top of the chat interface, left of the cog icon.

Step 41

Copy your response to the clipboard by clicking the copy icon in the code block of the response.

Answer

You will see a Copied! notification right above the icon.

Note

If you want to test the code we got instead of what the GPT model generated for you, you can copy our code from /responses/response1.py. To copy the code to the clipboard, open another terminal, issue this command: vi /responses/response1.py. Now type (do not forget the colon) :%y+ and press enter. Close the file by typing (do not forget the colon) :q and press enter.

Step 42

In the NexusAutomation.ipynb notebook, scroll down and hover your mouse below the output cell. Click the Click to add a cell button.

Answer

Step 43

Click inside the newly created cell, and paste the clipboard content inside by pressing the Ctrl+v keys.

Answer

The Notebook should now contain the pasted code as well.

Step 44

If you are using the code we got (located in /responses/response1.py), set the username variable to admin and fix the netmask from 255.255.255.128 to 255.255.255.0. If you are using your own response, augment it accordingly. Press Ctrl+s to save the changes in the notebook.

Answer

The fixed parts of the code are highlighted in the figure.

Step 45

Press the Ctrl+Enter keys to run the code.

Answer

You will see the prompt asking for the password from the getpass() function call. If you are using your own response, you might not have this function call in your script. If so, skip the next step.

Step 46

Write C!sco123 in the password field and press the enter key.

Answer

You will see the output of the code below.

The code failed. Error messages like these are often complicated and hard to understand, especially if you are not well versed in programming. You can use Jupyter AI to explain what went wrong and how to fix it.

Note

If you have been using your own response and it worked, skip the following step; otherwise, follow the same procedure.

Step 47

Right-click in the cell with the code above the output, select Generative AI, and click the Fix Code (1 error cell) option. The output might take up to 2-3 minutes to generate.

Answer

The GPT model created a prompt based on the /fix command and the active cell.

After a minute or two, we got the following explanation of the error message.

The following figure shows the list of supported devices. Notice that you must use cisco_nxos instead of cisco_nexus for the device_type.

Let's fix the error and see if the script now works.

Note

Sometimes the smaller models can go haywire and produce nonsensical responses if the input contains too many characters. You can use the very small phi3.5 model, but if you run into problems, simply switch to a more capable model, such as llama3.1 or mistral:instruct, which are both available on your jump host. You can set it by clicking the cog icon at the top of the chat interface and writing the model's name in the Local model ID field.

Step 48

Implement the suggested corrections. If you are using our code, set the device_type variable to cisco_nxos.

Answer

Notice how the blue bars changed to orange, indicating a change in the code.
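With all three fixes in place (username, netmask, and device_type), the working script is roughly shaped like this. This is a sketch, not the exact contents of /responses/response1.py: the interface name and address below are illustrative placeholders, and your own generated code may be structured differently.

```python
from getpass import getpass

def build_interface_config(interface, ip, netmask):
    """Return the NX-OS commands that assign an IP address to an interface."""
    return [
        f"interface {interface}",
        "no switchport",               # Nexus interfaces default to Layer 3,
                                       # but being explicit does no harm
        f"ip address {ip} {netmask}",  # /24 => 255.255.255.0, not .128
    ]

def push_config(commands):
    """Send the commands to the lab switch (requires access to the device)."""
    # netmiko is imported here so build_interface_config can be used
    # without netmiko installed
    from netmiko import ConnectHandler

    device = {
        "device_type": "cisco_nxos",   # the Step 48 fix; "cisco_nexus" is invalid
        "host": "172.16.0.10",
        "username": "admin",           # the Step 44 fix
        "password": getpass("Password: "),
    }
    with ConnectHandler(**device) as connection:
        print(connection.send_config_set(commands))
```

In a notebook cell you would then call, for example, push_config(build_interface_config("Ethernet1/1", "192.168.10.1", "255.255.255.0")).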

Step 49

Run the code by pressing the Ctrl+Enter keys, input C!sco123 when prompted for a password and press the enter key to continue.

Answer

You will get the following output from the switch.

Step 50

Scroll to the top of the notebook, click in the first code cell, and run it by pressing the Ctrl+Enter keys.

Answer

Notice how the output has changed to show the configured IP address on the interface. Note also that the interface is currently shut down, an issue the script did not address because it lacked the no shutdown command.

If you were using your own response, you can follow the same troubleshooting procedure.

Jupyter AI Magic Command

In previous tasks, you used the Jupyter AI chat interface and general GPT models, which may not be the most practical approach for coding. You notice that copy-pasting the code from the chat interface to the notebook can be cumbersome, and that frequently switching between general and coding-specific GPT models via the chat interface is also inefficient.

You notice that Jupyter AI provides the %%ai magic command, which lets you integrate GPT queries directly into the notebook and choose a specific GPT model for each query. This seems like the perfect solution to these inconveniences.

You dive a bit deeper into the documentation and learn that commands starting with a % are known as magic commands. These provide functionalities that extend beyond standard Python syntax within Jupyter notebooks. You realize there are two types of magic commands: line magics and cell magics. Line magics apply to the line they precede and are marked by a single %. Cell magics, on the other hand, apply to the entire cell and are denoted by %%.
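IPython's built-in timing magics illustrate the difference. The two snippets below represent two separate notebook cells; a cell magic such as %%time must be the first line of its cell:

```
%time sum(range(1_000_000))

%%time
total = 0
for n in range(1_000_000):
    total += n
```

The first cell times only the single expression on that line, while the second times everything in the cell.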

You decide to test the %%ai command and see how well it works.

Step 51

Scroll down to the bottom of your notebook, create a new cell, activate it, and write %load_ext jupyter_ai_magics inside.

Answer

Your notebook should look like the one in the figure. Note that the notebook tab is widened for better visual clarity.

The %load_ext jupyter_ai_magics command loads the necessary IPython extension, needed by the %%ai command.

Step 52

Run the cell with the %load_ext jupyter_ai_magics by pressing the Ctrl+Enter keys.

Answer

The command produces no output.

Note

If you see the following output: "The jupyter_ai_magics extension is already loaded. To reload it, use: %reload_ext jupyter_ai_magics", you can simply ignore it; you probably ran the command twice without realizing it.

Step 53

Add a new cell and copy-paste the text from /home/student/prompts/prompt2.txt into the new cell. You can use the vi /home/student/prompts/prompt2.txt command to open the vi editor and issue the (don't forget the colon) :%y+ command to copy its content to the clipboard.

Answer

This is the content in the /home/student/prompts/prompt2.txt file:

%%ai ollama:codegemma --format code
Write a Python script using netmiko to configure a Cisco Nexus device. The script should connect to the device with the IP 172.16.0.10, using the username admin and password C!sco123.
The script should create VLAN 10, configure the interface Ethernet1/2 as a Layer 2 switchport, set it as an access port assigned to VLAN 10, and enable the interface.
The script should show the output after each step.

Your notebook should look like the following.

Let's examine the command %%ai ollama:codegemma --format code. The %%ai is a cell magic command, meaning it applies to the entire cell, including the prompt text that follows. The syntax ollama:codegemma specifies the AI provider and the model to be used, in this case, the Ollama inference server running the codegemma model. This model is specifically designed to assist with coding tasks, offering improved capabilities in code generation, explanation, and debugging compared to previously used models. By default, Jupyter AI assumes that the output from a model will be in markdown format. However, using the --format parameter can change this default setting. Here, --format code ensures that the output is in Python syntax. Other supported formats include text, json, html, and math.

The prompt in this example is crafted with precision. GPT models can solve tasks of varying complexity, but their effectiveness improves when complex tasks are broken down into smaller problems. This process is known as prompt engineering. In this case, the first sentence of the prompt specifies the programming language and the target platform. It then outlines how the script connects to the device. Next, the prompt divides a complex task into detailed step-by-step instructions. Even the most advanced GPT models require carefully constructed prompts to deliver high-quality responses. Generative AI, therefore, serves as a tool. In the hands of a skilled network engineer, it can augment the user's capabilities, but it cannot replace the engineer, as it depends on the engineer to provide meaningful instructions to the GPT model.

Step 54

Run the cell by pressing the Ctrl+Enter keys. Wait for up to a minute or two for the code to generate.

Answer

You can see the output that we got in the following figure.

After reviewing our code, you notice that almost everything is in order. The script begins by connecting to the device, then creates VLAN 10 and shows the output of the show vlan brief command as verification. Next, it starts configuring the Ethernet1/2 interface. However, Nexus 9300v switches default to having all interfaces in Layer 3 mode. To configure an interface as a Layer 2 access port and assign it to a VLAN, you must first enter the switchport command, which is absent from the script provided by the AI. The rest of the configuration is correct. Also, while the show interface status command displays the status of all interfaces, you only need to verify the status of the Ethernet1/2 interface. The takeaway: generative AI can save you time, but it cannot replace an experienced network engineer's knowledge and skills. Generative AI can and will make mistakes, so always check its output and use your knowledge and experience to spot and fix any errors.
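The corrected command sequence can be sketched as follows. This is based on the description above, not on the literal contents of response2.py, so your generated script's structure may differ:

```python
def build_access_port_config(vlan_id, interface):
    """Return NX-OS commands that create a VLAN and bind an access port to it."""
    return [
        f"vlan {vlan_id}",
        f"interface {interface}",
        "switchport",                          # the command the AI omitted:
                                               # Nexus interfaces default to Layer 3
        "switchport mode access",
        f"switchport access vlan {vlan_id}",
        "no shutdown",
    ]

# Verification after each step would use commands such as:
#   show vlan brief
#   show interface ethernet1/2 status   <- narrowed from "show interface status"
print(build_access_port_config(10, "Ethernet1/2"))
```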

Step 55

If you want to follow along using the example, open a new terminal (press Ctrl+Alt+t keys or click the terminal icon) and issue the following command: vi /home/student/responses/response2.py. Otherwise, skip this step and the next three steps.

Answer

You should see the code inside the Vi editor.

Step 56

Write the command (do not forget to include the colon) :%y+

Answer

The command should be visible in the bottom-left corner of the terminal window.

Step 57

Press the enter key to issue the command and copy the file content to your clipboard.

Answer

You should see the following message at the bottom of the terminal window: 28 lines yanked into "+

Step 58

If you already generated some code with AI, you can select the AI-generated code and overwrite it by pasting the content copied from the response2.py script with the Ctrl+v keys. Otherwise, paste the content into a new cell.

Answer

Your notebook should look similar to the one in the figure.

Step 59

If you are using our example, add the missing switchport command and change the show interface status command to show interface ethernet1/2 status. Otherwise, apply any corrections your own response needs.

Answer

The changed lines are highlighted in the figure.

Step 60

Run the cell with the augmented code by pressing the Ctrl+Enter keys.

Answer

If you followed the example in the lab guide, you will get the following expected output.

If you used your own code and encountered any errors, simply right-click in the cell with the code, select Generative AI, click the Fix code option, and see if codegemma can help you fix the error.

Using generative AI effectively involves a lot of trial and error. You are encouraged to experiment with different GPT models to learn how to craft effective prompts and to determine which models best meet your needs. Keep in mind that general GPT models perform adequately across many tasks but may not excel at any; specialized GPT models are better suited for specific tasks. For coding tasks, the CodeGemma models are a good starting point. For general text generation and simpler coding tasks, models like Llama3.1, Mistral, and Phi3.5 are effective and efficient choices.

Keep going!